Governable AI: Provable Safety Under Extreme Threat Models
Wang, Donglin, Liang, Weiyun, Chen, Chunyuan, Xu, Jing, Fu, Yulong
As AI rapidly advances, the security risks posed by AI are becoming increasingly severe, especially in critical scenarios, including those posing existential risks. If AI becomes uncontrollable, manipulated, or actively evades safety mechanisms, it could trigger systemic disasters. Existing AI safety approaches, such as model enhancement, value alignment, and human intervention, suffer from fundamental, in-principle limitations when facing AI with extreme motivations and unlimited intelligence, and cannot guarantee security. To address this challenge, we propose a Governable AI (GAI) framework that shifts from traditional internal constraints to externally enforced structural compliance based on cryptographic mechanisms that are computationally infeasible to break, even for future AI, under the defined threat model and well-established cryptographic assumptions. The GAI framework is composed of a simple yet reliable, fully deterministic, powerful, flexible, and general-purpose rule enforcement module (REM); governance rules; and a governable secure super-platform (GSSP) that offers end-to-end protection against compromise or subversion by AI. The decoupling of the governance rules from the technical platform further enables a feasible and generalizable technical pathway for the safety governance of AI. REM enforces the bottom line defined by governance rules, while GSSP ensures non-bypassability, tamper-resistance, and unforgeability to eliminate all identified attack vectors. This paper also presents a rigorous formal proof of the security properties of this mechanism and demonstrates its effectiveness through a prototype implementation evaluated in representative high-stakes scenarios.
- North America > United States (0.28)
- Asia > China > Beijing > Beijing (0.04)
- Europe (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Expert Systems (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
ChainMarks: Securing DNN Watermark with Cryptographic Chain
Choi, Brian, Wang, Shu, Choi, Isabelle, Sun, Kun
With the widespread deployment of deep neural network (DNN) models, dynamic watermarking techniques are being used to protect the intellectual property of model owners. However, recent studies have shown that existing watermarking schemes are vulnerable to watermark removal and ambiguity attacks. Moreover, the vague criteria for determining watermark presence further increase the likelihood of such attacks. In this paper, we propose a secure DNN watermarking scheme named ChainMarks, which generates secure and robust watermarks by introducing a cryptographic chain into the trigger inputs and utilizes a two-phase Monte Carlo method for determining watermark presence. First, ChainMarks generates trigger inputs as a watermark dataset by repeatedly applying a hash function over a secret key, where the target labels associated with trigger inputs are generated from the digital signature of the model owner. Then, the watermarked model is produced by training a DNN over both the original and watermark datasets. To verify watermarks, we compare the predicted labels of trigger inputs with the target labels and determine ownership with a more accurate decision threshold that considers the classification probability of specific models. Experimental results show that ChainMarks exhibits higher levels of robustness and security compared to state-of-the-art watermarking schemes. With a better marginal utility, ChainMarks provides a higher probability guarantee of watermark presence in DNN models at the same level of watermark accuracy.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Asia > Vietnam > Hanoi > Hanoi (0.05)
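The ChainMarks abstract describes deriving trigger inputs by repeatedly applying a hash function over a secret key. A minimal sketch of such a hash chain, assuming SHA-256 as the hash function (the function name and parameters here are illustrative, not taken from the paper):

```python
import hashlib

def trigger_chain(secret_key: bytes, length: int) -> list[bytes]:
    """Build a chain of digests: h1 = H(key), h2 = H(h1), h3 = H(h2), ..."""
    chain = []
    digest = secret_key
    for _ in range(length):
        digest = hashlib.sha256(digest).digest()
        chain.append(digest)
    return chain

# Each 32-byte digest could seed one trigger input; because every link
# depends on the previous one, the full set of triggers traces back to
# the secret key, which an adversary mounting an ambiguity attack
# cannot reproduce.
chain = trigger_chain(b"owner-secret", 5)
```

Verifying ownership then reduces to re-deriving the chain from the secret key and checking the model's predictions on the corresponding triggers.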
FedSOV: Federated Model Secure Ownership Verification with Unforgeable Signature
Yang, Wenyuan, Zhu, Gongxi, Yin, Yuguo, Gu, Hanlin, Fan, Lixin, Yang, Qiang, Cao, Xiaochun
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data. The high cost of training and the significant value of the global model necessitate ownership verification for federated learning. However, the existing ownership verification schemes in federated learning suffer from several limitations, such as inadequate support for a large number of clients and vulnerability to ambiguity attacks. To address these limitations, we propose a cryptographic signature-based federated learning model ownership verification scheme named FedSOV. FedSOV allows numerous clients to embed their ownership credentials and verify ownership using unforgeable digital signatures. The scheme provides theoretical resistance to ambiguity attacks through the unforgeability of the signature. Experimental results on computer vision and natural language processing tasks demonstrate that FedSOV is an effective federated model ownership verification scheme enhanced with provable cryptographic security.
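The core idea behind signature-based ownership verification, signing a model fingerprint with a private key so that anyone holding the public key can check the claim, can be sketched with textbook RSA. This is a toy illustration with tiny primes, not FedSOV's actual construction; real deployments would use 2048+-bit keys from a vetted cryptographic library:

```python
import hashlib

# Toy RSA parameters (tiny primes for illustration only).
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def sign(message: bytes) -> int:
    """Owner signs a digest of the model fingerprint with the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone can verify with only the public key (n, e)."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

fingerprint = b"global-model-v1-hash"
sig = sign(fingerprint)
```

Unforgeability is what defeats ambiguity attacks: an adversary cannot produce a signature that verifies under the legitimate owner's public key without the private key.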
WordSig: QR streams enabling platform-independent self-identification that's impossible to deepfake
Deepfakes can degrade the fabric of society by limiting our ability to trust video content from leaders, authorities, and even friends. Cryptographically secure digital signatures may be used by video streaming platforms to endorse content, but these signatures are applied by the content distributor rather than the participants in the video. We introduce WordSig, a simple protocol allowing video participants to digitally sign the words they speak using a stream of QR codes, and allowing viewers to verify the consistency of signatures across videos. This allows establishing a trusted connection between the viewer and the participant that is not mediated by the content distributor. Given the widespread adoption of QR codes for distributing hyperlinks and vaccination records, and the increasing prevalence of celebrity deepfakes, 2022 or later may be a good time for public figures to begin using and promoting QR-based self-authentication tools.
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.54)
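The WordSig abstract describes signing spoken words as a stream of QR payloads that viewers can check for consistency. A simplified sketch of such a stream, where each payload carries a chunk of words, a link to the hash of the previous payload, and an authentication tag; HMAC stands in here for the public-key signature the real protocol would use, since a public-key scheme lets viewers verify without the speaker's secret (all names and the chunking scheme are assumptions for illustration):

```python
import hashlib
import hmac
import json

def qr_stream(speaker_key: bytes, words: list[str], chunk: int = 4) -> list[str]:
    """Chunk spoken words and emit one signed payload per chunk,
    each linked to the hash of the previous payload."""
    payloads, prev_hash = [], "0" * 64  # sentinel for the first link
    for i in range(0, len(words), chunk):
        body = {"words": words[i:i + chunk], "prev": prev_hash}
        msg = json.dumps(body, sort_keys=True).encode()
        # HMAC as a stand-in for a digital signature over the payload.
        tag = hmac.new(speaker_key, msg, hashlib.sha256).hexdigest()
        payloads.append(json.dumps({**body, "sig": tag}))
        prev_hash = hashlib.sha256(msg).hexdigest()
    return payloads

stream = qr_stream(b"speaker-secret", "we choose to go to the moon".split())
```

Each payload would be rendered as one QR code in the video; the hash links make it evident if frames are dropped or reordered, and the tag binds the words to the speaker's key across videos.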
Machine learning has an alarming threat: undetectable backdoors
This article is part of our coverage of the latest in AI research. If an adversary gives you a machine learning model and secretly plants a malicious backdoor in it, what are the chances that you can discover it? The security of machine learning is becoming increasingly critical as ML models find their way into a growing number of applications. The new study focuses on the security threats of delegating the training and development of machine learning models to third parties and service providers. With the shortage of AI talent and resources, many organizations are outsourcing their machine learning work, using pre-trained models or online ML services.
Top 10 Software Development Trends In 2022
Software development is not a static process but a dynamic one. Historically, the world witnessed the development of the information system between 1940 and 1960, and then came the idea of project management. Did you know that Henry Laurence Gantt and Frederick Winslow Taylor first proposed the concept of project management in 1910? Software products need to evolve continuously as consumer expectations keep changing. Adapting to these changes and remaining in demand through constant evolution is what confers a competitive edge.
Cyber-Hardening Autonomous Vehicles with the "Big Four"
Modern vehicles are increasingly complicated systems dependent on an array of computing hardware and software. To meet the growing demand for increased safety and new features, manufacturers are using more sophisticated in-vehicle networks to connect more Electronic Control Units (ECUs) and actuators. Autonomous vehicles are even more complicated, creating additional possible attack angles for hackers and a greater potential for security vulnerabilities. A security breach could cause malfunction or unexpected behavior of ECUs and the vehicle as a whole, leading to consequences ranging from reputational damage to serious safety accidents. One of Motional's core values is "Safety as our bedrock." To practice this, we reduce cybersecurity vulnerabilities through system hardening: reducing our attack surface and introducing fundamental security mechanisms to mitigate threats.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (1.00)
- Information Technology > Communications > Networks (0.90)
How Artificial Intelligence Helps Stop Digital Signature Forgery
Nowadays, one cannot deny the prominence of paperless transactions, including the signing of documents. With the advent of electronic signatures, there has been a spate of cybercrime, and the forgery of digital signatures has become a serious crime of which companies should be aware. However, it is also safe to say that the threat of forgery can be countered, thanks to artificial intelligence (AI). Artificial intelligence is a division of computer science that involves building machines and computer systems that perform tasks normally requiring human intelligence. In other words, AI is the simulation of human intelligence by machines.
Quantum computers could crack today's encrypted messages. That's a problem
Google plans to build million-qubit quantum computers by 2029 that are far more powerful than the system it showed in 2019. Quantum computers, if they mature enough, will be able to crack much of today's encryption. That would lay bare private communications, company data, and military secrets. Today's quantum computers are far too primitive to do so. But data surreptitiously gathered now could still be sensitive when more powerful quantum computers come online in a few years.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.35)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Hardware (1.00)
- Information Technology > Artificial Intelligence (1.00)